Inverting Monotonic Nonlinearities by Entropy Maximization
Authors
Abstract
This paper proposes a new method for the blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such mixtures of random variables arise, for example, in source separation and Wiener system inversion problems. The importance of the proposed method lies in the fact that it decouples the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear part (the source separation matrix or deconvolution filter), which can then be solved by any convenient linear algorithm. Our nonlinear compensation algorithm, MaxEnt, generalizes the idea of Gaussianizing the observation by maximizing its entropy instead. We develop two versions of the algorithm, based on either a polynomial or a neural-network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees the MaxEnt method succeeds in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for the blind approximation of nonlinear maps. Experiments show that MaxEnt successfully compensates monotonic distortions, outperforming other methods in terms of the obtained signal-to-noise ratio in many important cases, for example when the number of variables in the mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, i.e., it shows small variability in its results.
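To illustrate the Gaussianization idea that MaxEnt generalizes, the sketch below inverts a monotonic distortion of a sum of sources by mapping the observation through its empirical CDF and then the inverse Gaussian CDF. The setup is hypothetical: the `tanh` distortion, the eight uniform sources, and all variable names are illustrative assumptions, not the paper's actual experiments or algorithm.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
n = 10_000

# A sum of several independent sources is approximately Gaussian (CLT),
# so restoring Gaussianity approximately undoes a monotonic distortion.
s = rng.uniform(-1.0, 1.0, size=(8, n)).sum(axis=0)  # unobserved sum of sources
x = np.tanh(0.8 * s)                                  # observed: monotonic distortion f(s)

# Gaussianization: empirical CDF of x followed by the inverse Gaussian CDF.
# A monotonic f preserves ranks, so this recovers s up to a monotonic
# rank-matching map that is close to affine here.
ranks = np.argsort(np.argsort(x))
u = (ranks + 0.5) / n                                 # empirical CDF values in (0, 1)
nd = NormalDist()
s_hat = np.array([nd.inv_cdf(v) for v in u])          # Gaussianized observation

# Recovery quality: correlation with the true sum should be close to 1.
r = np.corrcoef(s_hat, s)[0, 1]
```

Because ranks are invariant under any monotonic map, the same code works whatever the (unknown) distortion is; MaxEnt replaces the fixed Gaussian target with direct entropy maximization.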
Similar papers
Non-monotonicity in probabilistic reasoning
We start by defining an approach to non-monotonic probabilistic reasoning in terms of non-monotonic categorical reasoning. We identify a type of non-monotonic probabilistic reasoning, akin to default inheritance, that seems to be commonly found in practice. We formulate this in terms of the Maximization of Conditional Independence (MCI), and identify a variety of applications for this sort ...
Non-monotonic Poisson Likelihood Maximization
This report summarizes the theory and some main applications of a new non-monotonic algorithm for maximizing a Poisson Likelihood, which for Positron Emission Tomography (PET) is equivalent to minimizing the associated Kullback-Leibler Divergence, and for Transmission Tomography is similar to maximizing the dual of a maximum entropy problem. We call our method non-monotonic maximum likelihood (...
Signal transcoding by nonlinear sensory neurons: information-entropy maximization, optimal transfer function, and anti-Hebbian adaptation.
A principle of information-entropy maximization is introduced in order to characterize the optimal representation of an arbitrarily varying quantity by a neural output confined to a finite interval. We then study the conditions under which a neuron can effectively fulfil the requirements imposed by this information-theoretic optimal principle. We show that this can be achieved with the natural ...
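For a bounded output interval, the entropy-maximizing transfer function is the cumulative distribution function of the input, which makes the output uniformly distributed (histogram equalization). The sketch below illustrates this with a hypothetical exponential stimulus distribution; the distribution choice and all names are illustrative assumptions, not taken from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(1)
stim = rng.exponential(1.0, size=5_000)  # hypothetical stimulus samples

# Entropy-maximizing transfer for an output confined to [0, 1]:
# pass each stimulus through the (empirical) input CDF, so the
# output is uniform and every output level is used equally often.
ranks = np.argsort(np.argsort(stim))
y = (ranks + 0.5) / stim.size            # neuron output in (0, 1)

# Uniformity check: equal-width histogram bins get equal counts.
counts, _ = np.histogram(y, bins=10, range=(0.0, 1.0))
```

A uniform output on a finite interval has the maximum differential entropy, which is the information-theoretic optimality criterion the snippet above refers to.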
Adaptive Control of Uncertain Hammerstein Systems with Non-Monotonic Input Nonlinearities Using Auxiliary Blocking Nonlinearities
We extend retrospective cost adaptive control (RCAC) with auxiliary nonlinearities to command following for uncertain Hammerstein systems with non-monotonic input nonlinearities. We assume that only one Markov parameter of the linear plant is known and that the non-monotonic input nonlinearity is uncertain. Auxiliary nonlinearities are used within RCAC to account for the non-monotonic input non...
Maxallent: Maximizers of all entropies and uncertainty of uncertainty
The entropy maximum approach (Maxent) was developed as a minimization of the subjective uncertainty measured by the Boltzmann–Gibbs–Shannon entropy. Many new entropies have been invented in the second half of the 20th century. Now there exists a rich choice of entropies for fitting needs. This diversity of entropies gave rise to a Maxent "anarchism". The Maxent approach is now the conditional...